IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis
We present a novel introspective variational autoencoder (IntroVAE) model for synthesizing high-resolution photographic images. IntroVAE is capable of self-evaluating the quality of its generated samples and improving itself accordingly. Its inference and generator models are jointly trained in an introspective way. On one hand, the generator is required to reconstruct the input images from the noisy outputs of the inference model, as in standard VAEs. On the other hand, the inference model is encouraged to discriminate between generated and real samples while the generator tries to fool it, as in GANs. These two well-known generative frameworks are integrated in a simple yet efficient single-stream architecture that can be trained in a single stage. IntroVAE preserves the advantages of VAEs, such as stable training and a well-structured latent manifold. Unlike most other hybrid models of VAEs and GANs, IntroVAE requires no extra discriminators, because the inference model itself serves as a discriminator to distinguish between generated and real samples. Experiments demonstrate that our method produces high-resolution photo-realistic images (e.g., CelebA images at 1024×1024 resolution), which are comparable to or better than those of state-of-the-art GANs.
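The introspective objective described above can be sketched in a few lines. In this illustrative version, the encoder's KL divergence to the prior doubles as the discriminator score: it is driven down for real samples and up (to a margin) for generated ones, while the generator pushes in the opposite direction. The margin m and the weights alpha and beta are hypothetical placeholder values, not the paper's exact hyperparameters, and the losses below are a simplified scalar sketch rather than the full training loop.

```python
import numpy as np

def kl_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent
    # dimensions and averaged over the batch.
    return float(np.mean(-0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=1)))

def encoder_loss(kl_real, kl_fake, recon_err, m=10.0, alpha=0.25, beta=1.0):
    # Encoder acts as the discriminator: pull real posteriors toward the
    # prior (small KL) and push generated ones away, up to margin m,
    # while still paying the reconstruction cost.
    return kl_real + alpha * max(0.0, m - kl_fake) + beta * recon_err

def generator_loss(kl_fake, recon_err, alpha=0.25, beta=1.0):
    # Generator tries to fool the encoder: make generated samples'
    # posteriors look like real ones (small KL), while reconstructing.
    return alpha * kl_fake + beta * recon_err

# Toy usage: a posterior exactly matching the prior has zero KL.
print(kl_gaussian(np.zeros((2, 4)), np.zeros((2, 4))))  # 0.0
print(encoder_loss(kl_real=0.5, kl_fake=12.0, recon_err=1.0))  # 1.5
print(generator_loss(kl_fake=12.0, recon_err=1.0))  # 4.0
```

Note the asymmetry: the encoder's fake-sample term is hinged at the margin, so once generated posteriors are far enough from the prior the encoder stops pushing, which is one reason the scheme can train stably without a separate discriminator network.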
Reviews: IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis
Update: I raised my score by two points because the rebuttal and reviews/comments revealed more differences than I originally noticed with respect to the AGE work, in particular the use of the KL divergence as a per-example discriminator, and because the authors promised to discuss the connection to AGE and potentially expand the experimental section. I remain concerned that the resulting model is no longer a variational autoencoder despite its name (it is closer to a GAN whose discriminator is based on the KL divergence), and about the experimental section, which shows that the method works well but does not provide a rich analysis of the proposed improvements.

Rather than using a separate discriminator network, the work proposes a learning objective that encourages the encoder to discriminate between real and generated data: it guides the approximate posterior to be close to the prior for real data and far from the prior otherwise. The approach is illustrated on the task of synthesizing high-resolution images, trained on the CelebA-HQ dataset.

First, high-quality image generation remains an important area of research, and as a result, the paper's topic is relevant to the community.
Huang, Huaibo, Li, Zhihang, He, Ran, Sun, Zhenan, Tan, Tieniu